Results 1 - 15 of 15
1.
Sci Rep ; 14(1): 9644, 2024 04 26.
Article in English | MEDLINE | ID: mdl-38671059

ABSTRACT

Assessing the individual risk of Major Adverse Cardiac Events (MACE) is of major importance, as cardiovascular diseases remain the leading cause of death worldwide. Quantitative Myocardial Perfusion Imaging (MPI) parameters such as stress Myocardial Blood Flow (sMBF) or Myocardial Flow Reserve (MFR) constitute the gold standard for prognosis assessment. We propose a systematic investigation of the value of Artificial Intelligence (AI) to leverage [82Rb] Silicon PhotoMultiplier (SiPM) PET MPI for MACE prediction. We establish a general pipeline for AI model validation to assess and compare the performance of global (i.e., average of the entire MPI signal), regional (17 segments), radiomics, and Convolutional Neural Network (CNN) models leveraging various MPI signals on a dataset of 234 patients. Results showed that all regional AI models significantly outperformed the global model (p < 0.001); the best AUC of 73.9% (CI 72.5-75.3) was obtained with a CNN model. A regional AI model based on MBF averages from 17 segments fed to a Logistic Regression (LR) constituted an excellent trade-off between model simplicity and performance, achieving an AUC of 73.4% (CI 72.3-74.7). A radiomics model based on intensity features revealed that the global average was the least important feature when compared to other aggregations of the MPI signal over the myocardium. We conclude that AI models can enable better-personalized prognosis assessment for MACE.
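As a rough sketch of the regional model described above (17 segment-wise MBF averages fed to a logistic regression, evaluated by AUC with a confidence interval), here is a minimal scikit-learn version; the data, feature layout, and bootstrap CI procedure are assumptions, not the authors' released pipeline.

```python
# Sketch: logistic regression on 17 segment-wise MBF averages for MACE
# prediction, with a bootstrapped AUC confidence interval. Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(2.0, 0.5, size=(234, 17))   # stress MBF per AHA segment (placeholder)
y = rng.integers(0, 2, size=234)           # MACE within follow-up (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]

# Bootstrap the test AUC to obtain a 95% confidence interval.
aucs = []
for _ in range(1000):
    idx = rng.integers(0, len(y_te), len(y_te))
    if len(np.unique(y_te[idx])) < 2:
        continue                           # skip degenerate resamples
    aucs.append(roc_auc_score(y_te[idx], scores[idx]))
print(f"AUC = {roc_auc_score(y_te, scores):.3f}, "
      f"95% CI [{np.percentile(aucs, 2.5):.3f}, {np.percentile(aucs, 97.5):.3f}]")
```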


Subject(s)
Myocardial Perfusion Imaging; Positron-Emission Tomography; Humans; Myocardial Perfusion Imaging/methods; Female; Male; Positron-Emission Tomography/methods; Middle Aged; Aged; Artificial Intelligence; Rubidium Radioisotopes; Prognosis; Neural Networks, Computer; Cardiovascular Diseases/diagnostic imaging; Cardiovascular Diseases/diagnosis; Coronary Circulation
2.
Radiology ; 310(2): e231319, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38319168

ABSTRACT

Filters are commonly used to enhance specific structures and patterns in images, such as vessels or peritumoral regions, to enable clinical insights beyond the visible image using radiomics. However, their lack of standardization restricts reproducibility and clinical translation of radiomics decision support tools. In this special report, teams of researchers who developed radiomics software participated in a three-phase study (September 2020 to December 2022) to establish a standardized set of filters. The first two phases focused on finding reference filtered images and reference feature values for commonly used convolutional filters: mean, Laplacian of Gaussian, Laws and Gabor kernels, separable and nonseparable wavelets (including decomposed forms), and Riesz transformations. In the first phase, 15 teams used digital phantoms to establish 33 reference filtered images of 36 filter configurations. In phase 2, 11 teams used a chest CT image to derive reference values for 323 of 396 features computed from filtered images using 22 filter and image processing configurations. Reference filtered images and feature values for Riesz transformations were not established. Reproducibility of standardized convolutional filters was validated on a public data set of multimodal imaging (CT, fluorodeoxyglucose PET, and T1-weighted MRI) in 51 patients with soft-tissue sarcoma. At validation, reproducibility of 486 features computed from filtered images using nine configurations × three imaging modalities was assessed using the lower bounds of 95% CIs of intraclass correlation coefficients. Out of 486 features, 458 were found to be reproducible across nine teams with lower bounds of 95% CIs of intraclass correlation coefficients greater than 0.75. In conclusion, eight filter types were standardized with reference filtered images and reference feature values for verifying and calibrating radiomics software packages. A web-based tool is available for compliance checking.
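To make filter-based feature extraction concrete, here is a minimal sketch of one standardized filter type (Laplacian of Gaussian) applied to a stand-in volume with SciPy; the parameters are illustrative and do not reproduce the reference configurations.

```python
# Sketch: a Laplacian-of-Gaussian (LoG) filtered image and a simple intensity
# feature computed from it, as in filter-based radiomics. Illustrative only.
import numpy as np
from scipy import ndimage

image = np.random.default_rng(0).normal(size=(64, 64, 64))  # stand-in for a CT volume
mask = np.zeros_like(image, dtype=bool)
mask[16:48, 16:48, 16:48] = True                            # stand-in ROI

sigma_mm, spacing_mm = 3.0, 1.0                             # filter scale and voxel spacing
filtered = ndimage.gaussian_laplace(image, sigma=sigma_mm / spacing_mm)

mean_intensity = filtered[mask].mean()                      # first-order feature on the filtered image
print(f"LoG mean intensity in ROI: {mean_intensity:.4f}")
```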


Subject(s)
Image Processing, Computer-Assisted; Radiomics; Humans; Reproducibility of Results; Biomarkers; Multimodal Imaging
3.
Med Image Anal ; 90: 102972, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37742374

ABSTRACT

By focusing on metabolic and morphological tissue properties, respectively, FluoroDeoxyGlucose (FDG)-Positron Emission Tomography (PET) and Computed Tomography (CT) modalities provide complementary and synergistic information for cancerous lesion delineation and characterization (e.g., for outcome prediction), in addition to the usual clinical variables. This is especially true in Head and Neck Cancer (HNC). The goal of the HEad and neCK TumOR segmentation and outcome prediction (HECKTOR) challenge was to develop and compare modern image analysis methods to best extract and leverage this information automatically. We present here the post-analysis of the 2nd edition of HECKTOR, held at the 24th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2021. The scope of the challenge was substantially expanded compared to the first edition, by providing a larger population (adding patients from a new clinical center) and proposing an additional task to the challengers, namely the prediction of Progression-Free Survival (PFS). To this end, the participants were given access to a training set of 224 cases from 5 different centers, each with a pre-treatment FDG-PET/CT scan and clinical variables. Their methods were subsequently evaluated on a held-out test set of 101 cases from two centers. For the segmentation task (Task 1), the ranking was based on a Borda count of the teams' ranks according to two metrics: mean Dice Similarity Coefficient (DSC) and median Hausdorff Distance at the 95th percentile (HD95). For the PFS prediction task, challengers could use the tumor contours provided by experts (Task 3) or rely on their own (Task 2). The ranking was based on the Concordance index (C-index) calculated on the predicted risk scores. A total of 103 teams registered for the challenge, for a total of 448 submissions and 29 papers. The best method in the segmentation task obtained an average DSC of 0.759, and the best PFS predictions obtained a C-index of 0.717 (without relying on the provided contours) and 0.698 (using the expert contours). An interesting finding was that the best PFS predictions were obtained with deep learning approaches (with or without explicit tumor segmentation; 4 of the 5 best-ranked methods) rather than with standard radiomics methods using handcrafted features extracted from delineated tumors, and by exploiting alternative tumor contours (automated and/or larger volumes encompassing surrounding tissues) rather than the expert contours. This second edition of the challenge confirmed the promising performance of fully automated primary tumor delineation in PET/CT images of HNC patients, although there is still margin for improvement in some difficult cases. For the first time, outcome prediction was also addressed, and the best methods reached relatively good performance (C-index above 0.7). Both results constitute another step forward toward large-scale outcome prediction studies in HNC.
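A minimal sketch of the Borda-count ranking used for Task 1, under the assumption that team ranks on the two metrics are simply summed; the scores below are invented for illustration.

```python
# Sketch of a Borda-count ranking: teams are ranked separately by mean DSC
# (higher is better) and median HD95 (lower is better), and the ranks summed.
from scipy.stats import rankdata

teams = ["A", "B", "C", "D"]
mean_dsc = [0.759, 0.744, 0.752, 0.731]       # higher is better (made-up scores)
median_hd95 = [3.1, 3.4, 2.9, 4.2]            # lower is better (made-up scores)

rank_dsc = rankdata([-s for s in mean_dsc])   # rank 1 = best DSC
rank_hd = rankdata(median_hd95)               # rank 1 = best HD95
borda = rank_dsc + rank_hd                    # lower total = better overall

for team, total in sorted(zip(teams, borda), key=lambda t: t[1]):
    print(team, total)
```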

4.
Head Neck Tumor Chall (2022) ; 13626: 1-30, 2023.
Article in English | MEDLINE | ID: mdl-37195050

ABSTRACT

This paper presents an overview of the third edition of the HEad and neCK TumOR segmentation and outcome prediction (HECKTOR) challenge, organized as a satellite event of the 25th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2022. The challenge comprises two tasks related to the automatic analysis of FDG-PET/CT images for patients with Head and Neck cancer (H&N), focusing on the oropharynx region. Task 1 is the fully automatic segmentation of H&N primary Gross Tumor Volume (GTVp) and metastatic lymph nodes (GTVn) from FDG-PET/CT images. Task 2 is the fully automatic prediction of Recurrence-Free Survival (RFS) from the same FDG-PET/CT and clinical data. The data were collected from nine centers for a total of 883 cases consisting of FDG-PET/CT images and clinical information, split into 524 training and 359 test cases. The best methods obtained an aggregated Dice Similarity Coefficient (DSCagg) of 0.788 in Task 1, and a Concordance index (C-index) of 0.682 in Task 2.
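For illustration, here is a sketch of an aggregated DSC in the spirit of the DSCagg reported above, assuming the common definition that sums intersections and volumes over all cases before taking the ratio; the exact challenge formula may differ in detail.

```python
# Sketch of an aggregated Dice Similarity Coefficient (DSCagg): intersections
# and volumes are summed over all cases before forming the ratio, so cases
# with an empty ground truth (e.g., no GTVn) remain well defined.
import numpy as np

def dsc_agg(gts, preds):
    inter = sum(np.logical_and(g, p).sum() for g, p in zip(gts, preds))
    total = sum(g.sum() + p.sum() for g, p in zip(gts, preds))
    return 2.0 * inter / total if total > 0 else 1.0

rng = np.random.default_rng(0)
gts = [rng.random((32, 32, 32)) > 0.7 for _ in range(5)]    # stand-in ground truths
preds = [rng.random((32, 32, 32)) > 0.7 for _ in range(5)]  # stand-in predictions
print(f"DSCagg = {dsc_agg(gts, preds):.3f}")
```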

5.
Artif Intell Rev ; 56(4): 3473-3504, 2023.
Article in English | MEDLINE | ID: mdl-36092822

ABSTRACT

Since its emergence in the 1960s, Artificial Intelligence (AI) has grown to conquer many technology products and their fields of application. Machine learning, as a major part of current AI solutions, can learn from data and through experience to reach high performance on various tasks. This growing success of AI algorithms has led to a need for interpretability in order to understand opaque models such as deep neural networks. Various requirements have been raised from different domains, together with numerous tools to debug models, justify their outcomes, and establish their safety, fairness, and reliability. This variety of tasks has led to inconsistencies in terminology, with terms such as interpretable, explainable, and transparent often used interchangeably in methodology papers. These words, however, convey different meanings and are "weighted" differently across domains, for example in the technical and social sciences. In this paper, we propose an overarching terminology of interpretability of AI systems that can be referred to by technical developers as much as by the social sciences community, in pursuit of clarity and efficiency in defining regulations for ethical and reliable AI development. We show how our taxonomy and definition of interpretable AI differ from those in previous research and how they apply with high versatility to several domains and use cases, proposing a much-needed standard for communication among interdisciplinary areas of AI.

6.
Semin Radiat Oncol ; 32(4): 319-329, 2022 10.
Article in English | MEDLINE | ID: mdl-36202435

ABSTRACT

Autosegmentation of gross tumor volumes holds promise to decrease clinical workload and to provide consistency across clinicians and institutions for radiation treatment planning. Additionally, autosegmentation can enable imaging analyses such as radiomics to construct and deploy large studies with thousands of patients. Here, we review modern results that utilize deep learning approaches to segment tumors in 5 major clinical sites: brain, head and neck, thorax, abdomen, and pelvis. We focus on approaches that inch closer to clinical adoption, highlighting winning entries in international competitions, unique network architectures, and novel ways of overcoming specific challenges. We also broadly discuss the future of gross tumor volume autosegmentation and the remaining barriers that must be overcome before widespread replacement or augmentation of manual contouring.


Subject(s)
Neoplasms; Radiation Oncology; Humans; Image Processing, Computer-Assisted/methods; Neoplasms/diagnostic imaging; Neoplasms/radiotherapy; Radiotherapy Planning, Computer-Assisted/methods
7.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 4731-4735, 2022 07.
Article in English | MEDLINE | ID: mdl-36086273

ABSTRACT

The prediction of cancer characteristics, treatment planning, and patient outcome from medical images generally requires tumor delineation. In Head and Neck cancer (H&N), the automatic segmentation and differentiation of primary Gross Tumor Volumes (GTVt) and malignant lymph nodes (GTVn) is a necessary step for large-scale radiomics studies to predict patient outcomes such as Progression-Free Survival (PFS). Detecting malignant lymph nodes is also a crucial step for Tumor-Node-Metastasis (TNM) staging and to support the decision to resect the nodes. In turn, automatic TNM staging and patient outcome prediction can greatly benefit patient care by helping clinicians find the best personalized treatment. We propose the first model to automatically segment GTVt and GTVn individually in PET/CT images. A bi-modal 3D U-Net model is trained for multi-class, multi-component segmentation on the multi-centric HECKTOR 2020 dataset containing 254 cases. The dataset was specifically re-annotated by experts to obtain ground-truth GTVn contours. The results show promising segmentation performance for the automation of radiomics pipelines and their validation in large-scale studies for which manual annotations are not available. An average test Dice Similarity Coefficient (DSC) of 0.717 is obtained for the segmentation of GTVt. The GTVn segmentation is evaluated with an aggregated DSC to account for cases without GTVn, estimated at 0.729 on the test set.
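A toy sketch of the bi-modal, multi-class setup described above: PET and CT stacked as two input channels and three output classes (background, GTVt, GTVn). The plain 3D convolution stack below is only a placeholder for the actual 3D U-Net.

```python
# Minimal sketch of a bi-modal 3D segmentation network: PET and CT enter as
# two input channels; the output has three per-voxel classes.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv3d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 3, kernel_size=1),            # per-voxel class logits
)

pet_ct = torch.randn(1, 2, 64, 64, 64)          # (batch, modality, D, H, W)
logits = model(pet_ct)                          # (1, 3, 64, 64, 64)
labels = logits.argmax(dim=1)                   # 0 = background, 1 = GTVt, 2 = GTVn
print(labels.shape)
```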


Subject(s)
Head and Neck Neoplasms; Positron Emission Tomography Computed Tomography; Head and Neck Neoplasms/diagnostic imaging; Humans; Lymph Nodes/diagnostic imaging
8.
Clin Transl Radiat Oncol ; 33: 153-158, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35243026

ABSTRACT

The vast majority of studies in the radiomics field are based on contours originating from radiotherapy planning. This kind of delineation (e.g., Gross Tumor Volume, GTV) is often larger than the true tumoral volume, sometimes including parts of other organs (e.g., the trachea in Head and Neck, H&N, studies), and the impact of such over-segmentation has so far been little investigated. In this paper, we propose to evaluate and compare the performance of models using two contour types: those from radiotherapy planning, and those specifically delineated for radiomics studies. For the latter, we modified the radiotherapy contours to fit the true tumoral volume. The two contour types were compared when predicting Progression-Free Survival (PFS) using Cox models based on radiomics features extracted from FluoroDeoxyGlucose-Positron Emission Tomography (FDG-PET) and CT images of 239 patients with oropharyngeal H&N cancer collected from five centers, i.e., the data from the 2020 HECKTOR challenge. Dedicated contours yielded better performance for predicting PFS, with Harrell's concordance indices of 0.61 and 0.69 for the Radiotherapy and Dedicated contours, respectively. Using automatically Resegmented contours based on a fixed intensity range was associated with a C-index of 0.63. These results illustrate the importance of using clean, dedicated contours that are close to the true tumoral volume in radiomics studies, even when tumor contours are already available from radiotherapy treatment planning.
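A minimal sketch of the evaluation described above, i.e., a Cox model on radiomics features scored with Harrell's concordance index, using the lifelines library on synthetic placeholder data; the feature names are hypothetical.

```python
# Sketch: a Cox proportional-hazards model on radiomics features predicting
# PFS, evaluated with Harrell's C-index via lifelines. Data are synthetic.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "feat_glcm": rng.normal(size=239),       # stand-ins for radiomics features
    "feat_shape": rng.normal(size=239),
    "pfs_months": rng.exponential(24, 239),  # progression-free survival time
    "event": rng.integers(0, 2, 239),        # 1 = progression observed
})

cph = CoxPHFitter().fit(df, duration_col="pfs_months", event_col="event")
risk = cph.predict_partial_hazard(df)        # higher risk = shorter expected PFS
print(f"C-index = {concordance_index(df['pfs_months'], -risk, df['event']):.3f}")
```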

9.
Med Image Anal ; 77: 102336, 2022 04.
Article in English | MEDLINE | ID: mdl-35016077

ABSTRACT

This paper presents the post-analysis of the first edition of the HEad and neCK TumOR (HECKTOR) challenge. The challenge was held as a satellite event of the 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2020, and was the first of its kind focusing on lesion segmentation in combined FDG-PET and CT image modalities. The challenge's task was the automatic segmentation of the Gross Tumor Volume (GTV) of Head and Neck (H&N) oropharyngeal primary tumors in FDG-PET/CT images. To this end, the participants were given a training set of 201 cases from four different centers, and their methods were tested on a held-out set of 53 cases from a fifth center. The methods were ranked according to the Dice Similarity Coefficient (DSC) averaged across all test cases. An additional inter-observer agreement study was organized to assess the difficulty of the task from a human perspective. 64 teams registered for the challenge, among which 10 provided a paper detailing their approach. The best method obtained an average DSC of 0.7591, a large improvement over our proposed baseline method and over the inter-observer agreement, associated with DSCs of 0.6610 and 0.61, respectively. The automatic methods proved to successfully leverage the wealth of metabolic and structural properties of the combined PET and CT modalities, significantly outperforming the human inter-observer agreement level, semi-automatic thresholding based on PET images, and other single-modality-based methods. This promising performance is one step forward toward large-scale radiomics studies in H&N cancer, obviating the need for error-prone and time-consuming manual delineation of GTVs.
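For reference, this first edition ranked methods on the per-case DSC averaged over the test set, in contrast to the aggregated variant of later editions; a minimal NumPy sketch on random stand-in volumes follows.

```python
# Sketch of the per-case Dice Similarity Coefficient used for ranking:
# DSC = 2|A ∩ B| / (|A| + |B|), averaged across all test cases.
import numpy as np

def dsc(gt, pred):
    denom = gt.sum() + pred.sum()
    return 2.0 * np.logical_and(gt, pred).sum() / denom if denom > 0 else 1.0

rng = np.random.default_rng(0)
cases = [(rng.random((32, 32, 32)) > 0.7, rng.random((32, 32, 32)) > 0.7)
         for _ in range(10)]                 # (ground truth, prediction) pairs
print(f"mean DSC = {np.mean([dsc(g, p) for g, p in cases]):.4f}")
```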


Subject(s)
Head and Neck Neoplasms; Positron Emission Tomography Computed Tomography; Fluorodeoxyglucose F18; Head and Neck Neoplasms/diagnostic imaging; Humans; Positron Emission Tomography Computed Tomography/methods; Positron-Emission Tomography/methods; Tumor Burden
10.
J Pers Med ; 11(9)2021 Aug 27.
Article in English | MEDLINE | ID: mdl-34575619

ABSTRACT

Radiomics converts medical images into mineable data via high-throughput extraction of quantitative features used for clinical decision support. However, these radiomic features are susceptible to variation across scanners, acquisition protocols, and reconstruction settings. Various investigations have assessed the reproducibility and validation of radiomic features across these discrepancies. In this narrative review, we combine systematic keyword searches with prior domain knowledge to discuss various harmonization solutions that make radiomic features more reproducible across scanners and protocol settings. The harmonization solutions discussed are divided into two main categories: image domain and feature domain. The image-domain category comprises methods such as the standardization of image acquisition, post-processing of raw sensor-level image data, data augmentation techniques, and style transfer. The feature-domain category consists of methods such as the identification of reproducible features and normalization techniques, including statistical normalization, intensity harmonization, ComBat and its derivatives, and normalization using deep learning. We also reflect upon the importance of deep learning solutions for addressing variability across multi-centric radiomic studies, especially generative adversarial networks (GANs), neural style transfer (NST) techniques, or a combination of both. We cover a broader range of methods, especially GANs and NST, in more detail than previous reviews.
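As a concrete example of the simplest feature-domain technique listed above, here is a sketch of per-scanner statistical (z-score) normalization; ComBat and its derivatives refine this idea with empirical Bayes pooling of batch effects.

```python
# Sketch of feature-domain harmonization via per-scanner z-score normalization.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "scanner": rng.choice(["A", "B", "C"], size=300),
    "feature": rng.normal(size=300),
})
df.loc[df.scanner == "B", "feature"] += 2.0   # simulated scanner offset

# Center and scale the feature within each scanner batch.
df["feature_harmonized"] = df.groupby("scanner")["feature"].transform(
    lambda x: (x - x.mean()) / x.std()
)
print(df.groupby("scanner")["feature_harmonized"].agg(["mean", "std"]))
```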

11.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1758-1761, 2020 07.
Article in English | MEDLINE | ID: mdl-33018338

ABSTRACT

Using medical images recorded in clinical practice has the potential to be a game-changer in the application of machine learning for medical decision support. Thousands of medical images are produced in daily clinical activity, and the diagnoses physicians derive from these images represent a source of knowledge for training machine learning algorithms for scientific research or computer-aided diagnosis. However, the need for manual data annotation and the heterogeneity of images and annotations make it difficult to develop algorithms that are effective on images from different centers or sources (scanner manufacturers, protocols, etc.). The objective of this article is to explore the opportunities and limits of highly heterogeneous biomedical data, since many medical data sets are small and pose a challenge for machine learning techniques. In particular, we focus on a small data set targeting meningioma grading. Meningioma grading is crucial for patient treatment and prognosis and is normally performed by histological examination, but recent articles have shown that it can also be performed non-invasively from magnetic resonance images (MRI). Our data set consists of 174 T1-weighted MRI scans of patients with meningioma, divided into 126 benign and 48 atypical/anaplastic cases, acquired with 26 different MRI scanners and 125 acquisition protocols, which illustrates the enormous variability in the data set. The preprocessing steps include tumor segmentation, spatial image normalization, and data augmentation based on color and affine transformations. The preprocessed cases are passed to a carefully trained 2-D convolutional neural network. Accuracy above 74% was obtained, with high-grade tumor recall also above 74%. These results are encouraging considering the limited size and high heterogeneity of the data set. The proposed methodology can be useful for other problems involving the classification of small and highly heterogeneous data sets.
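A sketch of the kind of affine and color augmentation described above, using torchvision; the actual transform types and parameter ranges of the study are assumptions.

```python
# Sketch: random affine + color augmentation of a 2-D slice with torchvision.
# The 3-channel tensor stands in for an intensity-normalized slice replicated
# across channels; parameter ranges are illustrative.
import torch
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomAffine(degrees=15, translate=(0.1, 0.1), scale=(0.9, 1.1)),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
])

slice_t1 = torch.rand(3, 224, 224)   # stand-in for a preprocessed T1-weighted slice
augmented = augment(slice_t1)
print(augmented.shape)
```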


Subject(s)
Meningeal Neoplasms; Neural Networks, Computer; Humans; Machine Learning; Magnetic Resonance Imaging; Magnetic Resonance Spectroscopy
12.
Med Image Anal ; 65: 101756, 2020 10.
Article in English | MEDLINE | ID: mdl-32623274

ABSTRACT

Locally Rotation Invariant (LRI) image analysis has been shown to be fundamental in many applications, in particular in medical imaging, where local structures of tissues occur at arbitrary rotations. LRI constituted the cornerstone of several breakthroughs in texture analysis, including Local Binary Patterns (LBP), Maximum Response 8 (MR8), and steerable filterbanks. Whereas globally rotation invariant Convolutional Neural Networks (CNN) were recently proposed, LRI has been little investigated in the context of deep learning. LRI designs allow learning filters that account for all orientations, which enables a drastic reduction of trainable parameters and training data when compared to standard 3D CNNs. In this paper, we propose and compare several methods to obtain LRI CNNs with directional sensitivity. Two methods use orientation channels (responses to rotated kernels), either by explicitly rotating the kernels or by using steerable filters. These orientation channels constitute a locally rotation equivariant representation of the data, and local pooling across orientations yields LRI image analysis. Steerable filters are used to achieve a fine and efficient sampling of 3D rotations as well as a reduction of trainable parameters and operations, thanks to a parametric representation involving solid Spherical Harmonics (SH), i.e., products of SH with associated learned radial profiles. Finally, we investigate a third strategy to obtain LRI, based on rotational invariants calculated from responses to a learned set of solid SHs. The proposed methods are evaluated and compared to standard CNNs on 3D datasets including synthetic textured volumes composed of rotated patterns, as well as pulmonary nodule classification in CT. The results show the importance of LRI image analysis, with a drastic reduction of trainable parameters, outperforming standard 3D CNNs trained with rotational data augmentation.
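A 2D toy sketch of the orientation-channel strategy: convolving with rotated copies of one kernel yields a rotation-equivariant representation, and max-pooling across the orientation channels yields a locally rotation invariant response. The paper's 3D steerable and solid-SH constructions are far more general; right-angle rotations are used here so the invariance can be checked exactly.

```python
# Toy LRI via orientation channels: convolve with rotated copies of a kernel,
# then max-pool across orientations. For 90-degree rotations the pooled map
# of a rotated image equals the rotated pooled map of the original.
import torch
import torch.nn.functional as F

kernel = torch.randn(1, 1, 5, 5)                        # one "learned" kernel
rotations = [torch.rot90(kernel, k, dims=(2, 3)) for k in range(4)]

image = torch.randn(1, 1, 64, 64)
responses = torch.cat([F.conv2d(image, w, padding=2) for w in rotations], dim=1)
lri_map = responses.max(dim=1, keepdim=True).values     # pool over orientations

# Check: the pooled response of a rotated input is the rotated pooled response.
rotated = torch.rot90(image, 1, dims=(2, 3))
resp_rot = torch.cat([F.conv2d(rotated, w, padding=2) for w in rotations], dim=1)
lri_rot = resp_rot.max(dim=1, keepdim=True).values
print(torch.allclose(torch.rot90(lri_map, 1, dims=(2, 3)), lri_rot, atol=1e-5))
```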


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Diagnostic Imaging; Humans
13.
Radiology ; 295(2): 328-338, 2020 05.
Article in English | MEDLINE | ID: mdl-32154773

ABSTRACT

Background Radiomic features may quantify characteristics present in medical imaging. However, the lack of standardized definitions and validated reference values has hampered clinical use. Purpose To standardize a set of 174 radiomic features. Materials and Methods Radiomic features were assessed in three phases. In phase I, 487 features were derived from the basic set of 174 features. Twenty-five research teams with unique radiomics software implementations computed feature values directly from a digital phantom, without any additional image processing. In phase II, 15 teams computed values for 1347 derived features using a CT image of a patient with lung cancer and predefined image processing configurations. In both phases, consensus among the teams on the validity of tentative reference values was measured through the frequency of the modal value and classified as follows: less than three matches, weak; three to five matches, moderate; six to nine matches, strong; 10 or more matches, very strong. In the final phase (phase III), a public data set of multimodality images (CT, fluorine 18 fluorodeoxyglucose PET, and T1-weighted MRI) from 51 patients with soft-tissue sarcoma was used to prospectively assess reproducibility of standardized features. Results Consensus on reference values was initially weak for 232 of 302 features (76.8%) at phase I and 703 of 1075 features (65.4%) at phase II. At the final iteration, weak consensus remained for only two of 487 features (0.4%) at phase I and 19 of 1347 features (1.4%) at phase II. Strong or better consensus was achieved for 463 of 487 features (95.1%) at phase I and 1220 of 1347 features (90.6%) at phase II. Overall, 169 of 174 features were standardized in the first two phases. In the final validation phase (phase III), most of the 169 standardized features could be excellently reproduced (166 with CT; 164 with PET; and 164 with MRI). Conclusion A set of 169 radiomics features was standardized, which enabled verification and calibration of different radiomics software. © RSNA, 2020 Online supplemental material is available for this article. See also the editorial by Kuhl and Truhn in this issue.
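A small sketch of the consensus measure described in Materials and Methods: count how often the modal value occurs across teams and map that frequency to the weak/moderate/strong/very strong levels; the value-matching tolerance used here is an assumption.

```python
# Sketch: classify consensus from the frequency of the modal feature value
# across teams. Tolerance for treating two values as a "match" is assumed.
from collections import Counter

def consensus(values, tolerance=1e-6):
    counts = Counter(round(v / tolerance) for v in values)  # bucket to tolerance
    matches = counts.most_common(1)[0][1]                   # modal-value frequency
    if matches < 3:
        return "weak"
    if matches <= 5:
        return "moderate"
    if matches <= 9:
        return "strong"
    return "very strong"

team_values = [3.1415, 3.1415, 3.1415, 3.1416, 2.9999, 3.1415]
print(consensus(team_values))   # 4 matching values -> "moderate"
```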


Subject(s)
Biomarkers/analysis; Image Processing, Computer-Assisted/standards; Software; Calibration; Fluorodeoxyglucose F18; Humans; Lung Neoplasms/diagnostic imaging; Magnetic Resonance Imaging; Phantoms, Imaging; Phenotype; Positron-Emission Tomography; Radiopharmaceuticals; Reproducibility of Results; Sarcoma/diagnostic imaging; Tomography, X-Ray Computed
14.
Article in English | MEDLINE | ID: mdl-31508414

ABSTRACT

One of the main obstacles to the implementation of deep convolutional neural networks (DCNNs) in the clinical pathology workflow is their low capability to overcome variability in slide preparation and scanner configuration, which leads to changes in tissue appearance. Some of these variations may not be included in the training data, which means that the models risk not generalizing well. Addressing such variations and evaluating them in reproducible scenarios allows understanding of when the models generalize better, which is crucial for performance improvements and better DCNN models. Staining normalization techniques (often based on color deconvolution and deep learning) and color augmentation approaches have shown improvements in the generalization of classification tasks for several tissue types. Domain-invariant training of DCNNs is also a promising technique for training a single model across different domains, since it includes the source-domain information to guide the training toward domain-invariant features, achieving state-of-the-art results in classification tasks. In this article, domain-adversarial neural network (DANN) training is applied to computational pathology and compared with widely used staining normalization and color augmentation methods in two challenging classification tasks. The classification tasks rely on two openly accessible datasets, targeting Gleason grading in prostate cancer and mitosis classification in breast tissue. Benchmarking the different techniques and their combinations in two DCNN architectures allows us to assess the generalization abilities and advantages of each method in the considered classification tasks. The code for reproducing our experiments and preprocessing the data is publicly available. Quantitative and qualitative results show that the use of DANN helps model generalization to external datasets. The combination of several techniques to manage color heterogeneity suggests that combined methods, such as color augmentation with DANN training, can generalize even further. The results do not show a single best technique among the considered methods, even when combining them; however, color augmentation and DANN training most often obtain the best results (alone or combined with color normalization and augmentation). The statistical significance of the results and the embedding visualizations provide useful insights for designing DCNNs that generalize to unseen staining appearances. Furthermore, in this work we release, for the first time, code for DANN evaluation on open-access datasets for computational pathology. This work opens the possibility of further research on using DANN models together with techniques that can overcome tissue preparation differences across datasets, to tackle limited generalization.
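At the core of DANN training is a gradient reversal layer; a minimal PyTorch sketch of that mechanism follows (network sizes and data are placeholders, not the paper's architecture).

```python
# Sketch of the gradient reversal layer used in DANN training: features feed
# a domain classifier, but gradients from that branch are flipped, pushing
# the feature extractor toward domain-invariant representations.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)          # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None  # reversed, scaled gradient

features = torch.randn(8, 128, requires_grad=True)   # stand-in for CNN features
domain_head = torch.nn.Linear(128, 2)                 # e.g., source site A vs B
domain_logits = domain_head(GradReverse.apply(features, 1.0))
loss = torch.nn.functional.cross_entropy(domain_logits, torch.randint(0, 2, (8,)))
loss.backward()                      # features receive *reversed* gradients
print(features.grad.shape)
```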

15.
J Med Imaging (Bellingham) ; 6(2): 024008, 2019 Apr.
Article in English | MEDLINE | ID: mdl-31205978

ABSTRACT

Radiomics has shown promising results in several medical studies, yet it suffers from limited discriminative and informative capability, as well as from high variability and correlation with tomographic scanner type, pixel spacing, acquisition protocol, and reconstruction parameters. We propose and compare two methods to transform quantitative image features in order to improve their stability across varying image acquisition parameters while preserving their texture discrimination abilities. In this way, variations in extracted features are representative of true physiopathological tissue changes in the scanned patients. The first approach is based on a two-layer neural network that can learn a nonlinear standardization transformation of various types of features, including handcrafted and deep features. Second, domain adversarial training is explored to increase the invariance of the transformed features to the scanner of origin. The generalization of the proposed approach to unseen textures and unseen scanners is demonstrated by a set of experiments using a publicly available computed tomography texture phantom dataset scanned with various imaging devices and parameters.
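A minimal sketch of the first approach, under the assumption of paired features from a test scanner and a reference scanner and a simple MSE objective: a two-layer network learns the standardization transform. Sizes, loss, and data are placeholders.

```python
# Sketch: a two-layer network learning a nonlinear standardization transform
# that maps scanner-dependent feature vectors toward a reference representation.
import torch
import torch.nn as nn

n_features = 43
standardizer = nn.Sequential(            # the two-layer transformation
    nn.Linear(n_features, 64), nn.ReLU(),
    nn.Linear(64, n_features),
)
opt = torch.optim.Adam(standardizer.parameters(), lr=1e-3)

# Paired features of the same texture acquired on two devices (synthetic here).
src = torch.randn(256, n_features)                            # scanner under test
ref = src * 0.8 + 0.3 + 0.05 * torch.randn(256, n_features)   # reference scanner

for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(standardizer(src), ref)
    loss.backward()
    opt.step()
print(f"final loss: {loss.item():.4f}")
```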
